
    Image Segmentation Using Weak Shape Priors

    The problem of image segmentation is known to become particularly challenging in the case of partial occlusion of the object(s) of interest, background clutter, and the presence of strong noise. To overcome this problem, the present paper introduces a novel approach to segmentation through the use of "weak" shape priors. Specifically, in the proposed method, a segmenting active contour is constrained to converge to a configuration at which the empirical probability densities of its geometric parameters closely match the corresponding model densities learned from training samples. It is shown through numerical experiments that the proposed shape modeling can be regarded as "weak" in the sense that it minimally influences the segmentation, which is allowed to be dominated by data-related forces. On the other hand, the priors provide sufficient constraints to regularize the convergence of segmentation, while requiring substantially smaller training sets and yielding less biased results as compared to PCA-based regularization methods. The main advantages of the proposed technique over some existing alternatives are demonstrated in a series of experiments. Comment: 27 pages, 8 figures
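    The density-matching constraint described above can be illustrated with a toy prior-energy term. The histogram-based KL divergence below is a hypothetical stand-in for the paper's actual formulation; the function name, binning, and choice of divergence are all assumptions.

```python
import numpy as np

def weak_prior_energy(params, model_density, bins, eps=1e-12):
    """KL divergence between the empirical density of a contour's
    geometric parameters and a learned model density. A hypothetical
    sketch of a density-matching "weak" prior, not the paper's code."""
    emp, _ = np.histogram(params, bins=bins, density=True)
    emp = emp / (emp.sum() + eps)               # normalize to a probability mass
    mod = np.asarray(model_density, float)
    mod = mod / (mod.sum() + eps)
    mask = emp > 0                              # 0 * log(0) contributes nothing
    return float(np.sum(emp[mask] * np.log((emp[mask] + eps) / (mod[mask] + eps))))
```

In a sketch like this, the prior energy would enter the contour evolution with a small weight, so that data-related forces dominate, consistent with the "weak" prior described above.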

    Solution Methodologies for the Smallest Enclosing Circle Problem

    Given a set of circles C = {c₁, ..., cₙ} on the Euclidean plane with centers {(a₁, b₁), ..., (aₙ, bₙ)} and radii {r₁, ..., rₙ}, the smallest enclosing circle (of fixed circles) problem is to find the circle of minimum radius that encloses all circles in C. We survey four known approaches for this problem: a second-order cone reformulation, a subgradient approach, a quadratic programming scheme, and a randomized incremental algorithm. For the last algorithm we also give some implementation details. It turns out that the quadratic programming scheme outperforms the other three in our computational experiments. Singapore-MIT Alliance (SMA)
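    The subgradient approach surveyed above admits a compact sketch: the problem is the unconstrained minimax program min over x of maxᵢ(‖x − cᵢ‖ + rᵢ), whose optimal value is the enclosing radius, and plain subgradient descent with a diminishing step converges to the optimal center. The routine below is an illustrative sketch with an assumed step rule, not the paper's implementation.

```python
import math

def smallest_enclosing_circle(circles, iters=2000):
    """Subgradient descent on f(x) = max_i (|x - c_i| + r_i), whose
    minimum value is the radius of the smallest circle enclosing all
    circles (a_i, b_i, r_i). Illustrative sketch; step rule assumed."""
    # start at the centroid of the centers
    x = sum(a for a, b, r in circles) / len(circles)
    y = sum(b for a, b, r in circles) / len(circles)
    for k in range(1, iters + 1):
        # the farthest circle supplies a subgradient direction
        f, gx, gy = max(
            (math.hypot(x - a, y - b) + r, x - a, y - b)
            for a, b, r in circles
        )
        d = math.hypot(gx, gy) or 1.0   # guard against a zero gradient
        step = 1.0 / k                  # diminishing step size
        x -= step * gx / d
        y -= step * gy / d
    radius = max(math.hypot(x - a, y - b) + r for a, b, r in circles)
    return (x, y), radius
```

For the two unit circles centered at (−1, 0) and (1, 0), the iterates settle near the origin with radius approaching 2, the known optimum for that instance.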

    An Information Tracking Approach to the Segmentation of Prostates in Ultrasound Imaging

    Outlining the prostate boundary in ultrasound images is a very useful procedure performed and subsequently used by clinicians. The contribution of the resulting segmentation is twofold. First of all, the segmentation of the prostate gland can be used to analyze the size, geometry, and volume of the gland. Such analysis is useful, as it is known that these quantities, used in conjunction with a PSA blood test, can serve as an indicator of malignancy in the gland itself. The second purpose of accurate segmentation is treatment planning. In brachytherapy, commonly used to treat localized prostate cancer, the accurate location of the prostate must be found so that the radioactive seeds can be placed precisely in the malignant regions. Unfortunately, the current method of segmentation of ultrasound images is performed manually by expert radiologists. Due to the abundance of ultrasound data, the process of manual segmentation can be extremely time consuming and inefficient. A much more desirable way to perform the segmentation is through automatic procedures, which should be able to accurately and efficiently extract the boundary of the prostate gland with minimal user intervention. This is the ultimate goal of the proposed approach. The proposed segmentation algorithm uses a probability distribution tracking framework to accurately and efficiently perform the task at hand. The basis for this methodology is to extract image and shape features from available manually segmented ultrasound images for which the actual prostate region is known. Then, the segmentation algorithm seeks a region in new ultrasound images whose features closely mirror the learned features of known prostate regions. Promising results were achieved using this method in a series of in silico and in vivo experiments.
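    The matching step, seeking a region whose feature distribution mirrors the learned one, can be illustrated with a common histogram-similarity measure. The Bhattacharyya coefficient below is an assumed stand-in for the paper's actual distribution-tracking criterion, and `best_matching_region` is a hypothetical helper.

```python
import numpy as np

def bhattacharyya(p, q, eps=1e-12):
    """Similarity between two feature histograms after normalization;
    1.0 means identical distributions, lower means less overlap."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(np.sqrt(p * q)))

def best_matching_region(candidate_hists, model_hist):
    """Pick the candidate region whose feature histogram best matches
    the learned prostate histogram (hypothetical selection step)."""
    return max(candidate_hists, key=lambda h: bhattacharyya(h, model_hist))
```

A candidate whose histogram matches the model exactly scores 1.0 and wins over any mismatched candidate, which is the behavior the tracking framework relies on.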

    Direct Doppler broadening in Monte Carlo simulations using the multipole representation

    A new approach for direct Doppler broadening of nuclear data in Monte Carlo simulations is proposed based on the multipole representation. The multipole representation transforms resonance parameters into a set of poles and residues, only some of which exhibit resonant behavior. A method is introduced to approximate the contribution to the background cross section in an effort to reduce the number of poles needing to be broadened. The multipole representation results in memory savings of 1–2 orders of magnitude over comparable techniques. This approach provides a simple way of computing nuclear data at any temperature, which is essential for multi-physics calculations, while having a minimal memory footprint, which is essential for scalable high performance computing. The concept is demonstrated on two major isotopes of uranium (U-235 and U-238) and implemented in the OpenMC code. Two LEU critical experiments were solved and showed great accuracy with a small loss of efficiency (10–30%) relative to a single-temperature pointwise library. United States. Dept. of Energy. Office of Advanced Scientific Computing Research (Contract DE-AC02-06CH11357)
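    The 0 K pole–residue evaluation can be sketched as follows; sign and normalization conventions vary between multipole implementations, so this form is an assumption, and the numbers used are illustrative, not evaluated nuclear data. Doppler broadening to finite temperature additionally requires Faddeeva-function evaluations, omitted here.

```python
import math

def multipole_xs(E, poles, residues):
    """0 K cross section from a pole-residue expansion, in the form
    sigma(E) = (1/E) * sum_j Re[ r_j / (p_j - sqrt(E)) ].
    Conventions assumed; poles/residues are illustrative only."""
    u = math.sqrt(E)  # the expansion is in momentum-like variable sqrt(E)
    return sum((r / (p - u)).real for p, r in zip(poles, residues)) / E
```

The appeal noted above is that a single pole/residue set reproduces the cross section at any energy, and broadening each pole (rather than a dense pointwise grid per temperature) is what yields the 1–2 orders of magnitude memory savings.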

    Scalable production of iPSC-derived human neurons to identify tau-lowering compounds by high-content screening

    Lowering total tau levels is an attractive therapeutic strategy for Alzheimer's disease and other tauopathies. High-throughput screening in neurons derived from human induced pluripotent stem cells (iPSCs) is a powerful tool to identify tau-targeted therapeutics. However, such screens have been hampered by heterogeneous neuronal production, high cost and low yield, and multi-step differentiation procedures. We engineered an isogenic iPSC line that harbors an inducible neurogenin 2 transgene, a transcription factor that rapidly converts iPSCs to neurons, integrated at the AAVS1 locus. Using a simplified two-step protocol, we differentiated these iPSCs into cortical glutamatergic neurons with minimal well-to-well variability. We developed a robust high-content screening assay to identify tau-lowering compounds in the LOPAC library and identified adrenergic receptor agonists as a class of compounds that reduce endogenous human tau. These techniques enable the use of human neurons for high-throughput screening of drugs to treat neurodegenerative disease.

    High-energy magnetic excitations from heavy quasiparticles in CeCu₂Si₂

    Magnetic fluctuations are the leading candidate for the pairing in cuprate, iron-based and heavy fermion superconductors. This view is challenged by the recent discovery of nodeless superconductivity in CeCu₂Si₂, which calls for a detailed understanding of the corresponding magnetic fluctuations. Here, we mapped out the magnetic excitations in superconducting (S-type) CeCu₂Si₂ using inelastic neutron scattering, finding a strongly asymmetric dispersion for E ≲ 1.5 meV, which at higher energies evolves into broad columnar magnetic excitations that extend to E ≳ 5 meV. While the low-energy magnetic excitations exhibit marked three-dimensional characteristics, the high-energy magnetic excitations in CeCu₂Si₂ are almost two-dimensional, reminiscent of the paramagnons found in cuprate and iron-based superconductors. By comparing our experimental findings with calculations in the random-phase approximation, we find that the magnetic excitations in CeCu₂Si₂ arise from quasiparticles associated with its heavy electron band, which are also responsible for superconductivity. Our results provide a basis for understanding magnetism and superconductivity in CeCu₂Si₂, and demonstrate the utility of neutron scattering in probing band renormalization in heavy fermion metals.
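    The random-phase approximation mentioned above dresses the bare susceptibility as χ = χ₀ / (1 − U χ₀), diverging as U χ₀ → 1 at a magnetic instability. The scalar toy version below only illustrates that enhancement; the calculation in the paper is of course multi-band and momentum-resolved.

```python
def rpa_susceptibility(chi0, U):
    """RPA-dressed susceptibility chi = chi0 / (1 - U * chi0).
    Scalar illustration: the enhancement grows without bound as
    U * chi0 approaches 1 (the magnetic instability criterion)."""
    return chi0 / (1.0 - U * chi0)
```

For fixed χ₀, increasing the interaction U enhances the response relative to the bare value, which is how quasiparticle interactions sharpen the magnetic excitations probed by neutron scattering.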

    Homogeneous low-molecular-weight heparins with reversible anticoagulant activity

    Low-molecular-weight heparins (LMWHs) are carbohydrate-based anticoagulants clinically used to treat thrombotic disorders, but impurities, structural heterogeneity or functional irreversibility can limit treatment options. We report a series of synthetic LMWHs prepared by cost-effective chemoenzymatic methods. The high activity of one defined synthetic LMWH against human factor Xa (FXa) was reversible in vitro and in vivo using protamine, demonstrating that synthetically accessible constructs can have a critical role in the next generation of LMWHs

    Determination of disease severity in COVID-19 patients using deep learning in chest X-ray images

    PURPOSE: Chest X-ray plays a key role in the diagnosis and management of COVID-19 patients, and imaging features associated with clinical elements may assist with the development or validation of automated image analysis tools. We aimed to identify associations between clinical and radiographic features, and to assess the feasibility of deep learning applied to chest X-rays in the setting of an acute COVID-19 outbreak.
    METHODS: A retrospective study of X-ray, clinical, and laboratory data was performed on 48 SARS-CoV-2 RT-PCR-positive patients (age 60±17 years, 15 women) between February 22 and March 6, 2020 from a tertiary care hospital in Milan, Italy. Sixty-five chest X-rays were reviewed by two radiologists for alveolar and interstitial opacities and classified by severity on a scale from 0 to 3. Clinical factors (age, symptoms, comorbidities) were investigated for association with opacity severity and with placement of a central line or endotracheal tube. Deep learning models were then trained for two tasks: lung segmentation and opacity detection. Imaging characteristics were compared to clinical data points using the unpaired Student's t-test or Mann-Whitney U test. Cohen's kappa analysis was used to evaluate the concordance of deep learning with conventional radiologist interpretation.
    RESULTS: Fifty-six percent of patients presented with alveolar opacities, 73% had interstitial opacities, and 23% had normal X-rays. The presence of alveolar or interstitial opacities was statistically correlated with age (P = 0.008) and comorbidities (P = 0.005). The extent of alveolar or interstitial opacities on baseline X-ray was significantly associated with the presence of an endotracheal tube (P = 0.0008 and P = 0.049) or central line (P = 0.003 and P = 0.007). In comparison to human interpretation, the deep learning model achieved a kappa concordance of 0.51 for alveolar opacities and 0.71 for interstitial opacities.
    CONCLUSION: Chest X-ray analysis in an acute COVID-19 outbreak showed that the severity of opacities was associated with advanced age, comorbidities, and acuity of care. Artificial intelligence tools based upon deep learning of COVID-19 chest X-rays are feasible in the acute outbreak setting.
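    Cohen's kappa, the concordance measure used to compare the model with the radiologists, corrects raw agreement for agreement expected by chance. A minimal implementation:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is chance agreement from the raters' marginal
    label frequencies. Labels can be any hashable values."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

Perfect agreement yields kappa of 1.0, while values such as the 0.51 and 0.71 reported above indicate moderate and substantial agreement beyond chance, respectively.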